57 research outputs found

    Sequential Convex Programming Methods for A Class of Structured Nonlinear Programming

    In this paper we study a broad class of structured nonlinear programming (SNLP) problems. In particular, we first establish the first-order optimality conditions for them. We then propose sequential convex programming (SCP) methods for solving them, in which each iterate is obtained by solving a convex programming subproblem exactly or inexactly. Under suitable assumptions, we establish that any accumulation point of the sequence generated by these methods is a KKT point of the SNLP problem. In addition, we propose a variant of the exact SCP method that uses a nonmonotone scheme and "local" Lipschitz constants of the associated functions, and we establish an analogous convergence result for it.
    Comment: This paper has been withdrawn by the author due to major revision and correction.
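
    For the special case min f(x) + p(x), with f smooth and p convex, one SCP iterate reduces to a proximal step on a quadratic model of f, as in the minimal sketch below. The names grad_f and prox_p and the constant L are illustrative assumptions; the paper's methods cover a broader problem class and allow the convex subproblems to be solved inexactly.

        import numpy as np

        def scp(grad_f, prox_p, x0, L, tol=1e-8, max_iter=1000):
            # Each iterate solves the convex subproblem
            #   min_x  f(x_k) + <grad f(x_k), x - x_k> + (L/2)||x - x_k||^2 + p(x),
            # whose minimizer is the proximal step below.
            x = x0
            for _ in range(max_iter):
                x_new = prox_p(x - grad_f(x) / L, 1.0 / L)
                if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
                    return x_new
                x = x_new
            return x

        # Example: p(x) = lam * ||x||_1 gives prox_p(v, t) = soft-thresholding of v at lam * t.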

    Adaptive First-Order Methods for General Sparse Inverse Covariance Selection

    In this paper, we consider estimating the sparse inverse covariance of a Gaussian graphical model whose conditional independence is assumed to be partially known. As in [5], we formulate this as an l_1-norm penalized maximum likelihood estimation problem. We further propose an algorithmic framework and develop two first-order methods within it, namely the adaptive spectral projected gradient (ASPG) method and the adaptive Nesterov's smooth (ANS) method, for solving this estimation problem. Finally, we compare the performance of the two methods on a set of randomly generated instances. Our computational results demonstrate that both methods can solve problems of dimension at least a thousand, with nearly half a million constraints, within a reasonable amount of time, and that the ASPG method generally outperforms the ANS method.
    Comment: 19 pages, 1 figure.
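
    As a rough illustration of the underlying computation (not the paper's exact ASPG method), the sketch below takes one proximal-gradient step on the penalized log-likelihood min_{X > 0} -log det X + tr(S X) + rho ||X||_1, halving the step size until the iterate stays positive definite; zero_mask is an assumed boolean array encoding the partially known conditional independence.

        import numpy as np

        def prox_grad_step(X, S, rho, t, zero_mask):
            # One proximal-gradient step for
            #   min_{X > 0}  -log det X + tr(S X) + rho * ||X||_1,
            # a simplified stand-in for the ASPG iteration (which also
            # adapts the spectral step size t).
            G = S - np.linalg.inv(X)                        # gradient of the smooth part
            for _ in range(60):                             # backtrack to stay positive definite
                Y = X - t * G
                Z = np.sign(Y) * np.maximum(np.abs(Y) - t * rho, 0.0)
                np.fill_diagonal(Z, np.diag(Y))             # leave the diagonal unpenalized
                Z[zero_mask] = 0.0                          # enforce the known zero pattern
                Z = (Z + Z.T) / 2.0
                if np.all(np.linalg.eigvalsh(Z) > 0.0):
                    return Z
                t /= 2.0
            return X                                        # fall back to the current iterate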

    An Augmented Lagrangian Approach for Sparse Principal Component Analysis

    Principal component analysis (PCA) is a widely used technique for data analysis and dimension reduction, with numerous applications in science and engineering. However, standard PCA suffers from the fact that the principal components (PCs) are usually linear combinations of all the original variables, which makes the PCs difficult to interpret. To alleviate this drawback, various sparse PCA approaches have been proposed in the literature [15, 6, 17, 28, 8, 25, 18, 7, 16]. Despite their success in achieving sparsity, these methods lose some important properties enjoyed by standard PCA, such as the uncorrelatedness of the PCs and the orthogonality of the loading vectors. Also, the total explained variance that they attempt to maximize can be too optimistic. In this paper we propose a new formulation for sparse PCA, aimed at finding sparse and nearly uncorrelated PCs with orthogonal loading vectors while explaining as much of the total variance as possible. We also develop a novel augmented Lagrangian method for solving a class of nonsmooth constrained optimization problems, which is well suited to our formulation of sparse PCA. We show that it converges to a feasible point, and moreover, under some regularity assumptions, to a stationary point. Additionally, we propose two nonmonotone gradient methods for solving the augmented Lagrangian subproblems and establish their global and local convergence. Finally, we compare our sparse PCA approach with several existing methods on synthetic, random, and real data. The computational results demonstrate that the sparse PCs produced by our approach substantially outperform those produced by other methods in terms of total explained variance, correlation of PCs, and orthogonality of loading vectors.
    Comment: 42 pages.
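
    The augmented Lagrangian method follows the classical template for min f(x) subject to c(x) = 0, outlined in the sketch below under illustrative assumptions: inner_solve is a placeholder for an (approximate) minimizer of the augmented Lagrangian, the role the paper's nonmonotone gradient methods play, and the multiplier and penalty updates are the textbook first-order rules.

        import numpy as np

        def augmented_lagrangian(inner_solve, c, x0, lam0, rho=10.0, tol=1e-6, max_outer=50):
            # Outer loop for  min f(x)  s.t.  c(x) = 0:
            # repeatedly minimize L_rho(x, lam) = f(x) + lam . c(x) + (rho/2) ||c(x)||^2
            # over x, then update the multipliers and tighten the penalty.
            x, lam = x0, lam0
            for _ in range(max_outer):
                x = inner_solve(lam, rho, x)    # approximate subproblem solve
                viol = c(x)
                if np.linalg.norm(viol) <= tol:
                    break
                lam = lam + rho * viol          # first-order multiplier update
                rho *= 2.0                      # increase the penalty parameter
            return x, lam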

    Schatten-p Quasi-Norm Regularized Matrix Optimization via Iterative Reweighted Singular Value Minimization

    In this paper we study general Schatten-p quasi-norm (SPQN) regularized matrix minimization problems. In particular, we first introduce a class of first-order stationary points for them, and show that the first-order stationary points introduced in [11] for an SPQN regularized vector minimization problem are equivalent to those of an SPQN regularized matrix minimization reformulation. We also show that any local minimizer of the SPQN regularized matrix minimization problems must be a first-order stationary point. Moreover, we derive lower bounds for the nonzero singular values of the first-order stationary points, and hence also of the local minimizers, of the SPQN regularized matrix minimization problems. We then propose iterative reweighted singular value minimization (IRSVM) methods for solving these problems, whose subproblems are shown to have a closed-form solution. In contrast to the analogous methods for SPQN regularized vector minimization problems, the convergence analysis of these methods is significantly more challenging. We develop a novel approach to establishing their convergence, which makes use of the expression of a specific solution of their subproblems and avoids the intricate issue of finding an explicit expression for the Clarke subdifferential of the subproblem objective. In particular, we show that any accumulation point of the sequence generated by the IRSVM methods is a first-order stationary point of the problems. Our computational results demonstrate that the IRSVM methods generally outperform some recently developed state-of-the-art methods in terms of solution quality and/or speed.
    Comment: This paper has been withdrawn by the author due to major revision and correction.
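
    The closed-form subproblem solution mentioned above amounts to a weighted singular value thresholding, sketched below under assumed weights built from the previous iterate's singular values (the exact weighting and the smoothing constant eps follow the paper, not this sketch).

        import numpy as np

        def irsvm_subproblem(Y, sigma_prev, lam, p, eps):
            # Closed-form solution of the reweighted subproblem
            #   min_X  0.5 * ||X - Y||_F^2 + sum_i w_i * sigma_i(X),
            # with assumed weights w_i = lam * p * (sigma_prev_i + eps)^(p - 1),
            # so smaller singular values are penalized more, mimicking the
            # Schatten-p quasi-norm for 0 < p < 1.
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            w = lam * p * (sigma_prev + eps) ** (p - 1.0)
            s_new = np.maximum(s - w, 0.0)                  # weighted singular value thresholding
            return (U * s_new) @ Vt, s_new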

    Computing Optimal Experimental Designs via Interior Point Method

    In this paper, we study optimal experimental design problems with a broad class of smooth convex optimality criteria, including the classical A-, D-, and p-th mean criteria. In particular, we propose an interior point (IP) method for them and establish its global convergence. Furthermore, by exploiting the structure of the Hessian matrix of the aforementioned optimality criteria, we derive an explicit formula for computing its rank. Using this result, we then show that the Newton direction arising in the IP method can be computed efficiently via the Sherman-Morrison-Woodbury formula when the size of the moment matrix is small relative to the sample size. Finally, we compare our IP method with the widely used multiplicative algorithm introduced by Silvey et al. [29]. The computational results show that the IP method generally outperforms the multiplicative algorithm in both speed and solution quality.
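
    The point about the Newton direction can be illustrated as follows: when a Hessian splits into a large diagonal plus a low-rank term (an illustrative stand-in for the structure the paper derives from the rank formula), the Sherman-Morrison-Woodbury formula reduces the Newton solve to a small linear system.

        import numpy as np

        def smw_solve(d, U, C, r):
            # Solve (diag(d) + U C U^T) x = r via the Sherman-Morrison-Woodbury formula:
            #   x = D^{-1} r - D^{-1} U (C^{-1} + U^T D^{-1} U)^{-1} U^T D^{-1} r.
            # With U of size n-by-k, this costs O(n k^2) instead of O(n^3),
            # a large saving when k (moment-matrix size) << n (sample size).
            Dinv_r = r / d
            Dinv_U = U / d[:, None]
            small = np.linalg.inv(C) + U.T @ Dinv_U         # k-by-k system
            return Dinv_r - Dinv_U @ np.linalg.solve(small, U.T @ Dinv_r)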